Cooperative Vector API #384
Conversation
Force-pushed from 7c65d4b to cd67909
Force-pushed from 89499a0 to 0247195
Review part 1 (three big files left to review)
One thing that will come up when we add the hash grid encoding, but good to keep in mind in general: atomic addition of …
Force-pushed from 7b38710 to dd7995a
Dr.Jit-Core always generates the f16x2 assembly operation, even when only scattering a single value. Right now, packet atomics are ignored by the CUDA backend. I think that Blackwell is the first consumer architecture that really supports these besides the f16x2 special case. In any case, such changes are out of scope for this already very big PR.
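For illustration, a made-up snippet of the kind of user code this concerns (assuming the standard ``drjit.scatter_add()`` API; buffer sizes and values are invented):

```python
import drjit as dr
from drjit.cuda import Float16, UInt32   # CUDA backend, per the discussion above

# Made-up buffers: accumulate per-thread half-precision contributions,
# roughly what a hash grid encoding would do with its gradients
target = dr.zeros(Float16, 8)
index  = UInt32([0, 1, 1, 3])
value  = Float16([0.5, 1.0, 2.0, 4.0])

# Each thread scatters a single Float16 value; as noted above, Dr.Jit-Core
# currently lowers this to the f16x2 atomic form on the CUDA backend
dr.scatter_add(target, value, index)

print(target)   # [0.5, 3, 0, 4, 0, 0, 0, 0]
```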
Went over the remaining files! Not many more comments this time.
I didn't really understand the changes in loop.cpp, but I trust that they make sense :)
Force-pushed from 0d875c1 to 6c23bc7
This commit improves handling of evaluated loops with grad-enabled state variables. Previously, the AD variable ID of each differentiable state variable changed in every iteration, even if the loop did not touch that variable. This is an implementation detail of the loop evaluation code that should, however, not leak into user code. This commit fixes that behavior.
This commit fixes bugs in the compilation of reverse-mode derivatives of simple loops (i.e., loops with max_iterations == -1) and updates the test suite to cover problematic cases.
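For illustration, a minimal sketch (not part of the PR) of the kind of loop these fixes target: a symbolic ``dr.while_loop`` left at the default max_iterations (-1, the "simple" case) whose body reads a grad-enabled variable, differentiated in reverse mode afterwards. The concrete values are made up.

```python
import drjit as dr
from drjit.auto.ad import Float, UInt32

x = Float(2.0)
dr.enable_grad(x)

def cond(i, y):
    return i < 4

def body(i, y):
    # Accumulate a term that differentiably depends on 'x'
    return i + 1, y + x * x

i, y = dr.while_loop(
    state=(UInt32(0), Float(0.0)),
    cond=cond,
    body=body,
)

dr.backward(y)      # y = 4 * x**2  =>  dy/dx = 8 * x = 16
print(dr.grad(x))
```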
Fix derivative of ``nn.matmul()`` in simple symbolic loops

This commit fixes bugs and adds tests to ensure that matrix multiplication can be correctly differentiated in reverse mode when it occurs inside a "simple" loop (i.e., a loop with max_iterations == -1).
This PR adds cooperative vector support to Dr.Jit. Cooperative vectors enable efficient compilation and evaluation of expressions involving matrix multiplication and cater to situations where each execution thread performs a sequence of independent multiplications by reasonably small matrices (e.g., 64x64). This enables the fully fused evaluation of small multilayer perceptrons within a larger program. That said, the feature isn't specific to MLPs and could also be used in other ways.
On NVIDIA GPUs (Turing or newer), cooperative vectors map to the OptiX cooperative vector API, leveraging the built-in tensor cores for acceleration. On the CPU (LLVM) backend, Dr.Jit compiles cooperative vector operations using available instruction set extensions (AVX512, NEON, etc.).
For further details on this new API and how to use it, refer to the documentation in ``docs/coop_vec.rst``.
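As a rough sketch of how this might look in user code: the names ``nn.pack()``, ``nn.CoopVec``, ``nn.matvec()`` and the ``'inference'`` layout below follow the new ``drjit.nn`` module described in ``docs/coop_vec.rst``, but exact signatures may differ, and all sizes/values are made up.

```python
import drjit as dr
import drjit.nn as nn
from drjit.auto.ad import Float16, TensorXf16

# Made-up 16x16 weight matrix and 16-element bias
A = dr.full(TensorXf16, 0.01, (16, 16))
b = dr.zeros(TensorXf16, (16,))

# Repack the parameters into an opaque, backend-optimal memory layout
# (assumed nn.pack() interface)
A_view, b_view = nn.pack(A, b, layout='inference')

# Each thread holds a 16-component cooperative vector of Float16 values
x = nn.CoopVec(*[Float16(0.5) for _ in range(16)])

# Fused matrix-vector product plus bias: tensor cores via OptiX on NVIDIA
# GPUs, AVX512/NEON code paths on the LLVM (CPU) backend
y = nn.matvec(A_view, x, b_view)

# Unpack the cooperative vector back into ordinary Float16 arrays
outputs = list(y)
```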